Results 1 - 9 of 9
1.
IEEE Trans Med Imaging ; 42(4): 1225-1236, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36449590

ABSTRACT

Accurate bowel segmentation is essential for the diagnosis and treatment of bowel cancers. Unfortunately, segmenting the entire bowel in CT images is quite challenging due to unclear boundaries; large variations in shape, size, and appearance; and diverse filling status within the bowel. In this paper, we present a novel two-stage framework, named BowelNet, to handle the challenging task of bowel segmentation in CT images, with two stages of 1) jointly localizing all types of the bowel, and 2) finely segmenting each type of the bowel. Specifically, in the first stage, we learn a unified localization network from both partially- and fully-labeled CT images to robustly detect all types of the bowel. To better capture unclear bowel boundaries and learn complex bowel shapes, in the second stage, we propose to jointly learn semantic information (i.e., the bowel segmentation mask) and geometric representations (i.e., the bowel boundary and bowel skeleton) for fine bowel segmentation in a multi-task learning scheme. Moreover, we further propose to learn a meta segmentation network via pseudo labels to improve segmentation accuracy. Evaluated on a large abdominal CT dataset, our proposed BowelNet method achieves Dice scores of 0.764, 0.848, 0.835, 0.774, and 0.824 in segmenting the duodenum, jejunum-ileum, colon, sigmoid, and rectum, respectively. These results demonstrate the effectiveness of the proposed BowelNet framework in segmenting the entire bowel from CT images.
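Since the per-organ results above are Dice scores, a minimal sketch of how such a score is computed from binary masks may help; the function name and toy masks are illustrative, not from the paper:

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

# Toy 1-D "masks": 3 overlapping voxels, 4 foreground voxels in each mask
pred = np.array([1, 1, 1, 1, 0, 0])
gt = np.array([0, 1, 1, 1, 1, 0])
print(dice_score(pred, gt))  # 2*3 / (4+4) = 0.75
```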


Subjects
Colon , Semantics , Pelvis , Machine Learning , Tomography, X-Ray Computed
2.
Nat Commun ; 13(1): 6566, 2022 Nov 02.
Article in English | MEDLINE | ID: mdl-36323677

ABSTRACT

In radiotherapy for cancer patients, delineating organs-at-risk (OARs) and tumors is an indispensable process, yet it is also the most time-consuming step because manual delineation by radiation oncologists is always required. Herein, we propose a lightweight deep learning framework for radiotherapy treatment planning (RTP), named RTP-Net, to enable automatic, rapid, and precise initialization of whole-body OARs and tumors. Briefly, the framework implements cascade coarse-to-fine segmentation, with an adaptive module for both small and large organs, and attention mechanisms for organs and boundaries. Our experiments demonstrate three merits: 1) extensive evaluation on 67 delineation tasks on a large-scale dataset of 28,581 cases; 2) comparable or superior accuracy, with an average Dice of 0.95; 3) near-real-time delineation (<2 s) in most tasks. This framework could be utilized to accelerate the contouring process in the All-in-One radiotherapy scheme, and thus greatly shorten the turnaround time of patients.
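The cascade coarse-to-fine idea (use a cheap coarse pass to localize an organ, then run a fine model only on the cropped region) can be sketched as follows; the function names and toy volume are assumptions, not RTP-Net's actual code:

```python
import numpy as np

def crop_to_roi(volume, coarse_mask, margin=2):
    """Crop a volume to the bounding box of a coarse mask, plus a margin."""
    idx = np.argwhere(coarse_mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
    roi_slices = tuple(slice(l, h) for l, h in zip(lo, hi))
    return volume[roi_slices], roi_slices

# Toy volume with a small 2x2x2 "organ" blob
vol = np.zeros((10, 10, 10))
vol[4:6, 4:6, 4:6] = 1.0
coarse = vol > 0.5                     # stand-in for a coarse network's output
roi, sl = crop_to_roi(vol, coarse, margin=1)
print(roi.shape)  # (4, 4, 4): the 2-voxel blob plus a 1-voxel margin per side
```

The fine network would then segment only `roi`, which keeps small organs at full resolution while bounding the compute cost.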


Subjects
Deep Learning , Neoplasms , Humans , Tomography, X-Ray Computed , Organs at Risk , Neoplasms/radiotherapy , Image Processing, Computer-Assisted
3.
Front Neuroinform ; 16: 937891, 2022.
Article in English | MEDLINE | ID: mdl-36120083

ABSTRACT

Objective: To explore the feasibility of a deep learning three-dimensional (3D) V-Net convolutional neural network for constructing high-resolution computed tomography (HRCT)-based auditory ossicle structure recognition and segmentation models. Methods: The temporal bone HRCT images of 158 patients were collected retrospectively, and the malleus, incus, and stapes were manually segmented. The 3D V-Net and U-Net convolutional neural networks were selected as the deep learning methods for segmenting the auditory ossicles. The temporal bone images were randomly divided into a training set (126 cases), a test set (16 cases), and a validation set (16 cases). Taking the results of manual segmentation as the reference, the segmentation results of each model were compared. Results: The Dice similarity coefficients (DSCs) between the malleus, incus, and stapes automatically segmented with the 3D V-Net convolutional neural network and those manually segmented from the HRCT images were 0.920 ± 0.014, 0.925 ± 0.014, and 0.835 ± 0.035, respectively. The average surface distance (ASD) was 0.257 ± 0.054, 0.236 ± 0.047, and 0.258 ± 0.077, respectively. The 95th-percentile Hausdorff distance (HD95) was 1.016 ± 0.080, 1.000 ± 0.000, and 1.027 ± 0.102, respectively. The DSCs between the malleus, incus, and stapes automatically segmented using the 3D U-Net convolutional neural network and those manually segmented from the HRCT images were 0.876 ± 0.025, 0.889 ± 0.023, and 0.758 ± 0.044, respectively. The ASD was 0.439 ± 0.208, 0.361 ± 0.077, and 0.433 ± 0.108, respectively. The HD95 was 1.361 ± 0.872, 1.174 ± 0.350, and 1.455 ± 0.618, respectively. As these results demonstrate, there was a statistically significant difference between the two groups (P < 0.001). Conclusion: The 3D V-Net convolutional neural network achieved automatic recognition and segmentation of the auditory ossicles with accuracy similar to that of manual segmentation.
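The surface-based metrics quoted above (ASD and HD95) can be computed from binary masks via distance transforms; a minimal sketch using SciPy, with illustrative toy masks rather than the study's data:

```python
import numpy as np
from scipy import ndimage

def surface(mask):
    """Boundary voxels of a binary mask."""
    return mask & ~ndimage.binary_erosion(mask)

def surface_distances(a, b):
    """Distance from each surface voxel of `a` to the surface of `b`."""
    dt_b = ndimage.distance_transform_edt(~surface(b))
    return dt_b[surface(a)]

def asd_hd95(a, b):
    """Average surface distance and 95th-percentile Hausdorff distance."""
    d = np.concatenate([surface_distances(a, b), surface_distances(b, a)])
    return d.mean(), np.percentile(d, 95)

# Two identical toy "ossicle" masks: both metrics must be zero
m = np.zeros((8, 8, 8), dtype=bool)
m[2:6, 2:6, 2:6] = True
asd, hd95 = asd_hd95(m, m)
print(asd, hd95)  # 0.0 0.0
```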

4.
BMC Med Imaging ; 22(1): 123, 2022 Jul 09.
Article in English | MEDLINE | ID: mdl-35810273

ABSTRACT

OBJECTIVES: Accurate contouring of the clinical target volume (CTV) is a key element of radiotherapy in cervical cancer. We validated a novel deep learning (DL)-based auto-segmentation algorithm for CTVs in cervical cancer, called the three-channel adaptive auto-segmentation network (TCAS). METHODS: A total of 107 cases were collected and contoured by senior radiation oncologists (ROs). Each case consisted of the following: (1) a contrast-enhanced CT scan for positioning, (2) the related CTV, (3) multiple plain CT scans during treatment, and (4) the related CTV. After registration between (1) and (3) for the same patient, the aligned image and CTV were generated. In method 1 (rigid registration) and method 2 (deformable registration), the aligned CTV is taken directly as the result; in method 3 (rigid registration followed by TCAS) and method 4 (deformable registration followed by TCAS), the result is generated by the DL-based method. RESULTS: From the 107 cases, 15 pairs were selected as the test set. The Dice similarity coefficient (DSC) of method 1 was 0.8155 ± 0.0368; the DSC of method 2 was 0.8277 ± 0.0315; the DSCs of methods 3 and 4 were 0.8914 ± 0.0294 and 0.8921 ± 0.0231, respectively. The mean surface distance and Hausdorff distance of methods 3 and 4 were markedly better than those of methods 1 and 2. CONCLUSIONS: The TCAS achieved accuracy comparable to the manual delineation performed by senior ROs and was significantly better than direct registration.
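A paired significance test of the kind implied by the RESULTS section can be sketched as follows; the per-case DSC values below are illustrative placeholders, chosen only so their means roughly match the reported ones:

```python
import numpy as np
from scipy import stats

# Illustrative per-case DSCs for the 15 test pairs (not the study's data)
dsc_reg = np.linspace(0.76, 0.87, 15)             # method 1: rigid registration only
dsc_tcas = dsc_reg + np.linspace(0.05, 0.09, 15)  # method 3: registration + TCAS

stat, p = stats.wilcoxon(dsc_tcas, dsc_reg)       # paired signed-rank test
print(f"p = {p:.2e}")  # every case improves, so p is far below 0.05
```

With only 15 pairs, a nonparametric paired test such as Wilcoxon's signed-rank is a common choice, since per-case DSCs need not be normally distributed.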


Subjects
Deep Learning , Uterine Cervical Neoplasms , Algorithms , Female , Humans , Image Processing, Computer-Assisted/methods , Radiotherapy Planning, Computer-Assisted/methods , Reactive Oxygen Species , Uterine Cervical Neoplasms/diagnostic imaging , Uterine Cervical Neoplasms/radiotherapy
5.
Ann Surg Oncol ; 2022 Mar 14.
Article in English | MEDLINE | ID: mdl-35286532

ABSTRACT

BACKGROUND: Exploring the genomic landscape of hepatocellular carcinoma (HCC) provides clues for therapeutic decision-making. Phosphatidylinositol-3 kinase (PI3K) signaling is one of the key pathways regulating HCC aggressiveness, and its genomic alterations have been correlated with sorafenib response. In this study, we aimed to predict somatic mutations of the PI3K signaling pathway in HCC samples through machine-learning-based radiomic analysis. METHODS: HCC patients who underwent next-generation sequencing and preoperative contrast-enhanced CT were recruited from West China Hospital and The Cancer Genome Atlas for model training and validation, respectively. Radiomic features were extracted from volumes of interest (VOIs) covering the tumor (VOItumor) and peritumoral areas (5 mm [VOI5mm], 10 mm [VOI10mm], and 20 mm [VOI20mm] from tumor margin). Factor analysis, logistic regression analysis, least absolute shrinkage and selection operator, and random forest analysis were applied for feature selection and model construction. Model performance was characterized based on the area under the receiver operating characteristic curve (AUC). RESULTS: A total of 132 HCC patients (mean age: 61.1 ± 14.7 years; 108 men) were enrolled. In the training set, the AUCs of radiomic signatures based on single CT phases were moderate (AUC 0.694-0.771). In the external validation set, the radiomic signature based on VOI10mm in arterial phase demonstrated the highest AUC (0.733) among all models. No improvement in model performance was achieved after adding the tumor radiomic features or manually assessed qualitative features. CONCLUSIONS: Machine-learning-based radiomic analysis had potential for characterizing alterations of PI3K signaling in HCC and could help identify potential candidates for sorafenib treatment.
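The feature-selection step described above (LASSO-style sparse logistic regression over radiomic features) can be sketched with scikit-learn; the feature matrix and labels below are synthetic stand-ins, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Synthetic stand-in: 132 cases x 50 peritumoral radiomic features,
# with a PI3K-alteration label driven by the first two features
X = rng.normal(size=(132, 50))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=132) > 0).astype(int)

Xs = StandardScaler().fit_transform(X)
# L1-penalised logistic regression acts as a LASSO-style feature selector
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(Xs, y)
selected = np.flatnonzero(clf.coef_[0])        # features surviving the penalty
auc = roc_auc_score(y, clf.decision_function(Xs))
print(len(selected), round(auc, 2))
```

In practice the AUC would of course be estimated on held-out data, as the study does with its external validation set; here it is computed on the training data purely to keep the sketch short.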

6.
J Appl Clin Med Phys ; 23(2): e13470, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34807501

ABSTRACT

OBJECTIVES: Because radiotherapy is indispensable for treating cervical cancer, it is critical to delineate the radiation targets accurately and efficiently. We evaluated a deep learning (DL)-based auto-segmentation algorithm for automatic contouring of clinical target volumes (CTVs) in cervical cancer. METHODS: Computed tomography (CT) datasets from 535 cervical cancer patients treated with definitive or postoperative radiotherapy were collected. A DL tool based on VB-Net was developed to delineate the CTVs of the pelvic lymph drainage area (dCTV1) and parametrial area (dCTV2) in the definitive radiotherapy group; the training/validation/test split was 157/20/23 cases. The CTV of the pelvic lymph drainage area (pCTV1) was delineated in the postoperative radiotherapy group; the training/validation/test split was 272/30/33 cases. The Dice similarity coefficient (DSC), mean surface distance (MSD), and Hausdorff distance (HD) were used to evaluate contouring accuracy, and contouring times were recorded for efficiency comparison. RESULTS: The mean DSC/MSD/HD values for our DL-based tool were 0.88/1.32 mm/21.60 mm for dCTV1, 0.70/2.42 mm/22.44 mm for dCTV2, and 0.86/1.15 mm/20.78 mm for pCTV1. Only minor modifications were needed for 63.5% of the auto-segmentations to meet clinical requirements. The contouring accuracy of the DL-based tool was comparable to that of senior radiation oncologists and superior to that of junior/intermediate radiation oncologists. Additionally, DL assistance improved the performance of junior radiation oncologists for dCTV2 and pCTV1 contouring (mean DSC increases: 0.20 for dCTV2, 0.03 for pCTV1; mean contouring time decreases: 9.8 min for dCTV2, 28.9 min for pCTV1). CONCLUSIONS: DL-based auto-segmentation improves CTV contouring accuracy, reduces contouring time, and improves clinical efficiency for treating cervical cancer.


Subjects
Deep Learning , Uterine Cervical Neoplasms , Algorithms , Female , Humans , Organs at Risk , Radiotherapy Planning, Computer-Assisted , Uterine Cervical Neoplasms/diagnostic imaging , Uterine Cervical Neoplasms/radiotherapy
7.
BMC Med Imaging ; 21(1): 57, 2021 Mar 23.
Article in English | MEDLINE | ID: mdl-33757431

ABSTRACT

BACKGROUND: The spatial and temporal distributions of lung infection in coronavirus disease 2019 (COVID-19), and their changes, could reveal important patterns for better understanding the disease and its time course. This paper presents a pipeline to statistically analyze these patterns by automatically segmenting the infection regions and registering them onto a common template. METHODS: A VB-Net is designed to automatically segment infection regions in CT images. After training and validating the model, we segmented all the CT images in the study. The segmentation results are then warped onto a pre-defined template CT image using deformable registration based on lung fields. The spatial distributions of infection regions, and their changes during the course of the disease, are then calculated at the voxel level; visualization and quantitative comparison can be performed between different groups. We compared the distribution maps between COVID-19 and community-acquired pneumonia (CAP), between severe and critical COVID-19, and across the time course of the disease. RESULTS: For infection segmentation, comparing the segmentation results with manually annotated ground truth, the average Dice is 91.6% ± 10.0%, which is close to the inter-rater difference between two radiologists (Dice of 96.1% ± 3.5%). The distribution map of infection regions shows that high-probability regions lie in the peripheral subpleural areas (up to 35.1% probability). COVID-19 GGO lesions are more widely spread than consolidations, and the latter are located more peripherally. Onset images of severe COVID-19 (inpatients) show similar lesion distributions, but with smaller areas of significant difference in the right lower lobe, compared to critical COVID-19 (intensive care unit patients).
Regarding the disease course, critical COVID-19 patients showed four successive patterns (progression, absorption, enlargement, and further absorption) in our collected dataset, with remarkable concurrent HU patterns for GGO and consolidations. CONCLUSIONS: By segmenting the infection regions with a VB-Net and registering all the CT images and segmentation results onto a template, spatial distribution patterns of infections can be computed automatically. The algorithm provides an effective tool to visualize and quantify the spatial patterns of lung infection diseases and their changes during the disease course. Our results demonstrate different patterns between COVID-19 and CAP, between severe and critical COVID-19, as well as four successive disease course patterns in the critical COVID-19 patients studied, with remarkable concurrent HU patterns for GGO and consolidations.
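Once all infection masks are warped onto a common template, the voxel-wise distribution map is simply the per-voxel frequency across patients; a toy sketch, with shapes and masks that are illustrative rather than real registered data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for per-patient infection masks already warped onto a 16^3 template
masks = rng.random((20, 16, 16, 16)) < 0.1      # 20 patients, sparse random lesions
masks[:, 8:12, 2:5, 8:12] = True                # one region involved in every patient

prob_map = masks.mean(axis=0)                   # voxel-wise infection frequency
print(prob_map.max())                           # 1.0 in the always-involved region
```

Group comparisons (e.g. COVID-19 vs CAP) then reduce to comparing such maps voxel by voxel.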


Assuntos
COVID-19/diagnóstico por imagem , Infecções Comunitárias Adquiridas/diagnóstico por imagem , Interpretação de Imagem Radiográfica Assistida por Computador/métodos , Algoritmos , Progressão da Doença , Humanos , Pneumonia/diagnóstico por imagem , Tomografia Computadorizada por Raios X/métodos
8.
Med Phys ; 48(4): 1633-1645, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33225476

ABSTRACT

OBJECTIVE: Computed tomography (CT) provides rich diagnostic and severity information on COVID-19 in clinical practice. However, there has been no computerized tool to automatically delineate COVID-19 infection regions in chest CT scans for quantitative assessment in advanced applications such as severity prediction. The aim of this study was to develop a deep learning (DL)-based method for automatic segmentation and quantification of infection regions, as well as the entire lungs, from chest CT scans. METHODS: The DL-based segmentation method employs the "VB-Net" neural network to segment COVID-19 infection regions in CT scans. The developed DL-based segmentation system is trained on CT scans from 249 COVID-19 patients, and further validated on CT scans from another 300 COVID-19 patients. To accelerate the manual delineation of CT scans for training, a human-involved-model-iterations (HIMI) strategy is also adopted to help radiologists refine the automatic annotation of each training case. To evaluate the performance of the DL-based segmentation system, three metrics, namely the Dice similarity coefficient, volume difference, and percentage of infection (POI), are calculated between automatic and manual segmentations on the validation set. Then, a clinical study on severity prediction is reported based on the quantitative infection assessment. RESULTS: The proposed DL-based segmentation system yielded a Dice similarity coefficient of 91.6% ± 10.0% between automatic and manual segmentations, and a mean POI estimation error of 0.3% for the whole lung on the validation dataset. Moreover, compared with fully manual delineation, which often takes hours per case, the proposed HIMI training strategy can dramatically reduce the delineation time to 4 min after three iterations of model updating.
In addition, the best accuracy of severity prediction was 73.4% ± 1.3% when the mass of infection (MOI) of multiple lung lobes and bronchopulmonary segments was used as the feature set, indicating the potential clinical application of our quantification technique for severity prediction. CONCLUSIONS: A DL-based segmentation system has been developed to automatically segment and quantify infection regions in CT scans of COVID-19 patients. Quantitative evaluation indicated high accuracy in both automatic infection delineation and severity prediction.
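The POI metric can be sketched directly from binary masks; the helper name and toy volumes below are assumptions, and MOI would additionally weight each infected voxel by its density:

```python
import numpy as np

def percentage_of_infection(infection_mask, lung_mask):
    """POI: infected lung volume as a percentage of total lung volume."""
    lung_voxels = lung_mask.sum()
    if lung_voxels == 0:
        return 0.0
    return 100.0 * np.logical_and(infection_mask, lung_mask).sum() / lung_voxels

# Toy volumes: the "lung" fills all 1000 voxels, the "infection" 50 of them
lung = np.ones((10, 10, 10), dtype=bool)
infection = np.zeros((10, 10, 10), dtype=bool)
infection[0, :5, :] = True                       # 5 x 10 = 50 infected voxels
print(percentage_of_infection(infection, lung))  # 5.0
```

Computing the same quantity per lobe or bronchopulmonary segment just means intersecting with each region's mask instead of the whole-lung mask.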


Subjects
COVID-19/diagnostic imaging , Deep Learning , Image Interpretation, Computer-Assisted , Lung/diagnostic imaging , Tomography, X-Ray Computed , Humans
9.
Med Image Anal ; 68: 101910, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33285483

ABSTRACT

The coronavirus disease, named COVID-19, has become the largest global public health crisis since it started in early 2020. CT imaging has been used as a complementary tool to assist early screening, especially for the rapid identification of COVID-19 cases from community-acquired pneumonia (CAP) cases. The main challenge in early screening is how to model the confusing cases in the COVID-19 and CAP groups, which have very similar clinical manifestations and imaging features. To tackle this challenge, we propose an Uncertainty Vertex-weighted Hypergraph Learning (UVHL) method to identify COVID-19 from CAP using CT images. In particular, multiple types of features (including regional features and radiomics features) are first extracted from the CT image of each case. Then, the relationship among different cases is formulated by a hypergraph structure, with each case represented as a vertex in the hypergraph. The uncertainty of each vertex is further computed with an uncertainty score measurement and used as a weight in the hypergraph. Finally, a learning process on the vertex-weighted hypergraph is used to predict whether a new testing case belongs to COVID-19 or not. Experiments on a large multi-center pneumonia dataset, consisting of 2148 COVID-19 cases and 1182 CAP cases from five hospitals, are conducted to evaluate the prediction accuracy of the proposed method. The results demonstrate the effectiveness and robustness of the proposed method for identifying COVID-19, in comparison to state-of-the-art methods.
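In much-simplified form, vertex-weighted hypergraph learning can be sketched as label propagation through a normalized hypergraph adjacency of the form Dv^{-1/2} H De^{-1} H^T Dv^{-1/2}; the incidence matrix, weights, and clamping scheme below are illustrative assumptions, not the UVHL implementation:

```python
import numpy as np

def hypergraph_propagate(H, w_vert, labels, known, iters=50, lam=0.9):
    """Label propagation on a vertex-weighted hypergraph.

    H      : (n_vertices, n_edges) incidence matrix
    w_vert : per-vertex weights (e.g. derived from uncertainty scores)
    labels : +1 (COVID-19) / -1 (CAP) / 0 (unknown)
    known  : boolean mask of labelled vertices
    """
    Hw = H * w_vert[:, None]                    # weight each vertex's incidences
    Dv = np.maximum(Hw.sum(axis=1), 1e-12)      # vertex degrees
    De = np.maximum(Hw.sum(axis=0), 1e-12)      # hyperedge degrees
    S = Hw / np.sqrt(Dv)[:, None]               # Dv^{-1/2} H
    A = (S / De) @ S.T                          # normalised adjacency
    f = labels.astype(float)
    for _ in range(iters):
        f = lam * (A @ f) + (1 - lam) * labels  # smooth, then pull toward labels
        f[known] = labels[known]                # clamp the known cases
    return np.sign(f)

# Toy example: two hyperedges grouping three similar cases each,
# with one labelled case per group
H = np.array([[1, 0], [1, 0], [1, 0],
              [0, 1], [0, 1], [0, 1]], dtype=float)
w = np.ones(6)                                  # uniform certainty in this sketch
y = np.array([1, 0, 0, -1, 0, 0])
known = y != 0
print(hypergraph_propagate(H, w, y, known))     # each case inherits its group's label
```

Down-weighting a vertex (small `w_vert[i]`) shrinks its incidences and hence its influence on its hyperedges, which is the intuition behind weighting uncertain cases less.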


Subjects
COVID-19/diagnostic imaging , Community-Acquired Infections/diagnostic imaging , Diagnosis, Computer-Assisted/methods , Machine Learning , Pneumonia, Viral/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed , China , Community-Acquired Infections/virology , Datasets as Topic , Diagnosis, Differential , Humans , Pneumonia, Viral/virology , SARS-CoV-2